Applying Unsupervised Learning and Action Selection to Robot Teleoperation
Author
Abstract
Unsupervised learning and supervised remote teleoperator control for robots may seem an unlikely combination. This paper argues that the combination holds advantages for both parties. The operator would like to “instruct” the robot without any special effort, and then be able to hand over some or all of the tasks to be performed without loss of overall supervisory control. In return, the learning algorithm receives a continuous stream of exemplar data relevant to the tasks it might later be asked to perform. We consider an unsupervised learning method, the Dynamic Expectancy Model, and a teleoperator “architecture” offering just such a serendipitous combination.
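The architecture pairs two otherwise separate loops: the operator's command stream drives the robot, while the learner passively mines that stream for situation-action-outcome regularities it can later exploit when control is handed over. The sketch below illustrates that idea with a toy expectancy store; it is an assumed simplification, not the Dynamic Expectancy Model itself, and the class names and symbolic situations are hypothetical.

```python
# Minimal sketch (assumed, not the paper's implementation) of learning from a
# teleoperation stream: each operator command yields a (situation, action,
# outcome) triple, stored as a testable expectancy with a confidence estimate.
from dataclasses import dataclass


@dataclass
class Expectancy:
    """A 'situation -> action -> expected outcome' hypothesis with success statistics."""
    trials: int = 0
    successes: int = 0

    @property
    def confidence(self) -> float:
        return self.successes / self.trials if self.trials else 0.0


class ExpectancyStore:
    def __init__(self):
        self.rules = {}  # (situation, action, outcome) -> Expectancy

    def observe(self, situation, action, outcome):
        """Score every rule that made a prediction for this situation/action pair."""
        for (s, a, o), rule in self.rules.items():
            if s == situation and a == action:
                rule.trials += 1
                rule.successes += int(o == outcome)
        key = (situation, action, outcome)
        if key not in self.rules:  # first time this outcome has been seen
            self.rules[key] = Expectancy(trials=1, successes=1)

    def propose(self, situation, goal):
        """After hand-over: pick the action most confidently predicting the goal."""
        candidates = [(rule.confidence, a)
                      for (s, a, o), rule in self.rules.items()
                      if s == situation and o == goal]
        return max(candidates)[1] if candidates else None


# During supervised teleoperation the operator drives; the learner only listens.
store = ExpectancyStore()
teleop_log = [("at_door", "push", "door_open"),
              ("at_door", "push", "door_open"),
              ("at_door", "pull", "at_door")]
for situation, action, outcome in teleop_log:
    store.observe(situation, action, outcome)

# Later, when tasks are handed over, the robot proposes its own action.
print(store.propose("at_door", goal="door_open"))  # -> "push"
```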
Similar resources
An Unsupervised Learning Method for an Attacker Agent in Robot Soccer Competitions Based on the Kohonen Neural Network
The RoboCup competition, as a great test-bed, has become a popular domain worldwide in recent years. The main objective of such competitions is to deal with the complex behavior of systems which consist of multiple autonomous agents. The rich experience of human soccer players can be used as a valuable reference for a robot soccer player. However, because of the differences between real and simulated soc...
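As a rough illustration of the Kohonen-network idea this abstract refers to, the sketch below runs a plain self-organizing-map update over synthetic game-situation vectors. The grid size, learning rate, neighbourhood width, and input features are assumptions for illustration, not the competition agent's actual design.

```python
# A minimal Kohonen self-organizing map (SOM) update, sketched with NumPy.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 4            # 5x5 map over 4-D situation vectors
weights = rng.random((grid_h, grid_w, dim))
coords = np.dstack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"))


def train_step(weights, x, lr=0.3, sigma=1.5):
    """One SOM step: find the best-matching unit, pull its neighbourhood toward x."""
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)     # best-matching unit
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
    influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
    weights += lr * influence[..., None] * (x - weights)      # in-place update


# Train on synthetic 'game situation' vectors (e.g. ball and opponent bearings).
for _ in range(1000):
    train_step(weights, rng.random(dim))
```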
A Q-learning Based Continuous Tuning of Fuzzy Wall Tracking
A simple, easy-to-implement algorithm is proposed to address the wall tracking task of an autonomous robot. The robot should navigate in unknown environments, find the nearest wall, and track it solely based on locally sensed data. The proposed method benefits from coupling fuzzy logic and Q-learning to meet the requirements of autonomous navigation. Fuzzy if-then rules provide a reliable decision maki...
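To make the fuzzy/Q-learning coupling concrete, here is a bare tabular Q-learning loop of the kind that could drive such tuning. The discretised wall-distance states, steering actions, and reward below are illustrative assumptions, not the paper's fuzzy rule-base.

```python
# Tabular Q-learning over coarse wall-distance states and steering actions.
import random
from collections import defaultdict

ACTIONS = ["steer_left", "straight", "steer_right"]
q = defaultdict(float)                      # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.1


def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])


def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])


# One illustrative transition: too close to the wall, steering away is rewarded.
update("too_close", "steer_left", reward=1.0, next_state="at_setpoint")
print(choose("too_close"))
```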
Bilateral Teleoperation Systems Using Backtracking Search Optimization Algorithm Based Iterative Learning Control
This paper deals with the application of Iterative Learning Control (ILC) to further improve the performance of teleoperation systems based on the Smith predictor. The goal is to achieve robust stability and optimal transparency for these systems. The proposed control structure makes the slave manipulator follow the master in spite of uncertainties in the communication-channel time delay and model pa...
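The core ILC mechanism behind such schemes can be sketched in a few lines: after each trial the feedforward command is corrected by a learning gain times the tracking error. The toy unit-delay plant below stands in for the delayed slave dynamics; the gain, reference, and plant are assumptions, and the paper's Smith predictor and backtracking-search tuning are not reproduced here.

```python
# First-order iterative learning control on a toy delayed plant.
import numpy as np

T = 100
reference = np.sin(np.linspace(0, 2 * np.pi, T))   # desired slave trajectory
u = np.zeros(T)                                    # feedforward command, trial 0
L = 0.5                                            # learning gain (assumed scalar)


def plant(u):
    """Toy plant: a pure one-sample transport delay standing in for channel + slave."""
    y = np.zeros_like(u)
    y[1:] = u[:-1]
    return y


for trial in range(30):
    error = reference - plant(u)
    # ILC update, shifted one sample to match the plant delay:
    # u_{k+1}(t) = u_k(t) + L * e_k(t+1)
    u[:-1] += L * error[1:]

print(f"RMS tracking error after 30 trials: "
      f"{np.sqrt(np.mean((reference - plant(u)) ** 2)):.2e}")
```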
Hippocampal Spatial Model for State Space Representation in Robotic Reinforcement Learning
We study reinforcement learning for cognitive navigation. The state space representation is constructed by unsupervised learning during exploration. As a result of learning, a stable representation of the continuous two-dimensional manifold in the high-dimensional input space is found. The representation consists of a population of localized overlapping place fields. This state-space coding is a...
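A population of overlapping place fields can be sketched as Gaussian radial basis functions tiling the arena, with the robot's position encoded by their joint activation. The cell spacing and field width below are illustrative assumptions, not the paper's learned fields.

```python
# Place-field population code: Gaussian 'place cells' over a 1 m x 1 m arena.
import numpy as np

xs, ys = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6))
centres = np.column_stack([xs.ravel(), ys.ravel()])   # 36 place-cell centres
sigma = 0.15                                          # place-field width (m)


def place_code(position):
    """Population activity vector for a 2-D position."""
    d2 = np.sum((centres - np.asarray(position)) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))


state = place_code((0.4, 0.7))
print(state.shape, state.argmax())   # 36 overlapping place-cell activations

# Such a code feeds naturally into RL, e.g. a linear value estimate v = w . phi.
w = np.zeros(len(centres))
value = w @ place_code((0.4, 0.7))
```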
Novel Interaction Strategies for Learning from Teleoperation
The field of robot Learning from Demonstration (LfD) makes use of several input modalities for demonstrations (teleoperation, kinesthetic teaching, marker- and vision-based motion tracking). In this paper we present two experiments aimed at identifying and overcoming challenges associated with using teleoperation as an input modality for LfD. Our first experiment compares kinesthetic teaching and...